Segmentation of kidney structures from computed tomography angiography (CTA) is essential for many computer-assisted renal cancer treatment applications. The Kidney Parsing (KiPA 2022) challenge aims to build a fine-grained multi-structure dataset and to improve the segmentation of multiple renal structures. Recently, U-Net has dominated medical image segmentation. In the KiPA challenge, we evaluated several U-Net variants and selected the best-performing models for the final submission.
This report describes the winning solution to the Robust Vision Challenge (RVC) semantic segmentation track at ECCV 2022. Our method adopts the FAN-B-Hybrid model as the encoder and uses SegFormer as the segmentation framework. The model is trained on a composite dataset consisting of images from 9 datasets (ADE20K, Cityscapes, Mapillary Vistas, ScanNet, VIPER, WildDash 2, IDD, BDD, and COCO) with a simple dataset balancing strategy. All the original labels are projected into a 256-class unified label space, and the model is trained using a cross-entropy loss. Without significant hyperparameter tuning or any specific loss weighting, our solution ranks first on all the semantic segmentation test benchmarks from multiple domains (ADE20K, Cityscapes, Mapillary Vistas, ScanNet, VIPER, and WildDash 2). The proposed method can serve as a strong baseline for the multi-domain segmentation task and benefit future work. Code will be available at https://github.com/lambert-x/RVC_Segmentation.
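Since the key ingredient here is projecting every dataset's labels into a single 256-class space before computing a plain cross-entropy loss, the following is a minimal sketch of that remapping step. The mapping tables, class ids, and helper names are illustrative assumptions, not the released RVC label mapping.

```python
import torch
import torch.nn.functional as F

UNIFIED_NUM_CLASSES = 256
IGNORE_INDEX = -100  # PyTorch's default ignore index for cross-entropy

# Hypothetical local-id -> unified-id tables; the actual mapping is defined by
# the challenge and covers each dataset's full label range.
LABEL_MAPS = {
    "cityscapes": {0: 12, 1: 13, 2: 14},  # placeholder entries
    "ade20k":     {0: 40, 1: 41, 2: 42},  # placeholder entries
}

def build_lookup(dataset_name: str) -> torch.Tensor:
    """Dense lookup table: local label id -> unified label id (or ignore)."""
    mapping = LABEL_MAPS[dataset_name]
    lut = torch.full((max(mapping) + 1,), IGNORE_INDEX, dtype=torch.long)
    for local_id, unified_id in mapping.items():
        lut[local_id] = unified_id
    return lut

def unified_ce_loss(logits: torch.Tensor, local_labels: torch.Tensor,
                    dataset_name: str) -> torch.Tensor:
    """Cross-entropy against labels remapped into the unified space.

    logits: (N, 256, H, W) segmentation logits
    local_labels: (N, H, W) dataset-specific label ids
    """
    lut = build_lookup(dataset_name).to(local_labels.device)
    unified = lut[local_labels]  # remap every pixel to the unified label space
    return F.cross_entropy(logits, unified, ignore_index=IGNORE_INDEX)
```

The dataset balancing mentioned in the abstract could then be handled independently, e.g. by sampling batches from the 9 datasets with per-dataset weights, without touching this loss.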
Deep learning (DL)-based medical image classification and segmentation is a pressing research topic for diagnosing the variant viruses of the ongoing COVID-19 pandemic. In lung COVID-19 computed tomography (CT) images, ground-glass opacity is the most common finding and requires specialist diagnosis. In this setting, several researchers have proposed related DL models that can stand in for professional diagnosticians in clinics where expertise is lacking. However, although DL methods show impressive performance in medical image processing, limited datasets make it challenging to reach human-level diagnostic accuracy. In addition, deep learning algorithms face the challenge of classifying and segmenting medical images in three or even more dimensions while maintaining high accuracy. With a guaranteed high level of accuracy, our model classifies patients' CT images into three types: normal, pneumonia, and COVID. Subsequently, two datasets are used for segmentation, one of which contains only limited data (20 cases). Our system combines a classification model and a segmentation model, built on the ResNet50 and 3D U-Net architectures. Fed with the different datasets, segmentation of the infected regions in COVID images is carried out according to the classification results. Our model achieves 94.52% accuracy when classifying lung lesions into three types: COVID, pneumonia, and normal. For future medical use, embedding the model in medical facilities could be an effective way to assist or substitute for doctors in diagnosis, so that the broader problem of variant viruses in the COVID-19 situation can also be addressed.
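As a rough illustration of the two-stage design described above (a ResNet50-style classifier routing scans to a 3D U-Net segmenter only when COVID is predicted), here is a hedged sketch; the class order, constructor interfaces, and wrapper are assumptions, not the authors' implementation.

```python
import torch

# Assumed class order for the classifier's three outputs.
CLASS_NAMES = ["normal", "pneumonia", "covid"]

class TwoStagePipeline(torch.nn.Module):
    """Classification-then-segmentation wrapper: segment only COVID-positive scans."""

    def __init__(self, classifier: torch.nn.Module, segmenter: torch.nn.Module):
        super().__init__()
        self.classifier = classifier  # e.g. a ResNet50-based model over the CT scan
        self.segmenter = segmenter    # e.g. a 3D U-Net over the full volume

    @torch.no_grad()
    def forward(self, volume: torch.Tensor):
        """volume: (1, C, D, H, W) preprocessed CT volume."""
        logits = self.classifier(volume)                 # (1, 3) class scores
        label = CLASS_NAMES[int(logits.argmax(dim=1))]
        mask = self.segmenter(volume) if label == "covid" else None
        return label, mask
```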
Federated learning (FL) is a technique for training models on data distributed across devices. Differential privacy (DP) provides a formal privacy guarantee for sensitive data. Our goal is to train a large neural network language model (NNLM) on compute-constrained devices while preserving privacy with FL and DP. However, the DP noise injected into the model increases as the model size grows, which often prevents convergence. We propose Partial Embedding Updates (PEU), a novel technique that reduces the noise by reducing the payload size. In addition, we adopt Low-Rank Adaptation (LoRA) and Noise Contrastive Estimation (NCE) to reduce the memory demands of large models on compute-constrained devices. This combination of techniques makes it possible to train large-vocabulary language models while preserving accuracy and privacy.
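The abstract leans on Low-Rank Adaptation (LoRA) to shrink the set of trainable parameters, and hence the on-device memory and the payload that must be noised and transmitted. Below is a generic, hedged LoRA sketch; where LoRA is placed inside the NNLM and the PEU/NCE components are not shown, and the rank/scaling values are assumptions.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Low-rank adaptation of a frozen linear layer (generic sketch, not the
    paper's exact configuration)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the low-rank factors are trained/updated
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the trainable low-rank update B @ A.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
```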
Built on top of the self-attention mechanism, vision transformers have recently demonstrated remarkable performance on a variety of vision tasks. While achieving excellent performance, they still require relatively heavy computational costs that scale drastically as the numbers of patches, self-attention heads, and transformer blocks increase. In this paper, we argue that, because images vary greatly, their need for modeling long-range dependencies between patches differs. To this end, we introduce AdaViT, an adaptive computation framework that learns to derive, on a per-input basis, usage policies for which patches, self-attention heads, and transformer blocks to use throughout the backbone, aiming to improve the inference efficiency of vision transformers with a minimal drop in accuracy for image recognition. Optimized jointly with the transformer backbone in an end-to-end manner, a lightweight decision network is attached to the backbone to produce these decisions on the fly. Extensive experiments on ImageNet show that our method improves efficiency by more than 2x compared with state-of-the-art vision transformers, with only a 0.8% drop in accuracy, achieving good efficiency/accuracy trade-offs across different computational budgets. We further conduct quantitative and qualitative analyses of the learned usage policies and provide more insights into the redundancy of vision transformers.
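To make the idea of per-input usage policies concrete, here is a toy decision head in the spirit of AdaViT that emits keep/skip gates for patches, heads, and blocks from a summary token. The layout, the sigmoid gating, and the straight-through relaxation are assumptions for illustration, not the paper's exact decision network.

```python
import torch
import torch.nn as nn

class UsagePolicyHead(nn.Module):
    """Toy per-input decision head: keep/skip gates for patches, heads, blocks."""

    def __init__(self, embed_dim: int, num_patches: int, num_heads: int, num_blocks: int):
        super().__init__()
        self.patch_gate = nn.Linear(embed_dim, num_patches)
        self.head_gate = nn.Linear(embed_dim, num_heads * num_blocks)
        self.block_gate = nn.Linear(embed_dim, num_blocks)

    def forward(self, cls_token: torch.Tensor):
        """cls_token: (N, embed_dim) summary of the current input."""
        def hard_keep(logits: torch.Tensor) -> torch.Tensor:
            probs = torch.sigmoid(logits)
            hard = (probs > 0.5).float()
            # Straight-through estimator keeps the binary gates differentiable.
            return hard + probs - probs.detach()

        return (hard_keep(self.patch_gate(cls_token)),
                hard_keep(self.head_gate(cls_token)),
                hard_keep(self.block_gate(cls_token)))
```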
Extending the hydrodynamic equations from dense gas regions to rarefied gas regions remains a great challenge. The key to success is to obtain accurate constitutive relations for the stress and the heat flux. Recent data-driven models offer a new phenomenological approach to learning constitutive relations from data. Such models construct complex constitutive relations that extend Newton's law of viscosity and Fourier's law of heat conduction by regressing on higher-order derivatives. However, the choice of derivatives in these models is ad hoc, without a clear physical interpretation. We theoretically investigate the data-driven models in linear systems. We argue that these models are equivalent to nonlinear length-scale scaling laws for the transport coefficients. The equivalence to scaling laws justifies the physical plausibility of the data-driven models and reveals their limitations. Our argument also points out that modeling the scaling laws explicitly can avoid practical difficulties of the data-driven models, such as estimating derivatives on massive data and selecting variables. We further propose a constitutive-relation model based on the scaling laws and test it on the computation of Rayleigh scattering spectra. The results show a clear advantage of the data-driven model over the first-order Chapman-Enskog expansion and the moment methods.
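As a purely illustrative example of what a length-scale scaling law for a transport coefficient can look like (the specific rational form below is an assumption for illustration, not the law derived in the paper), consider a wavenumber-dependent effective viscosity replacing the constant Newtonian coefficient in Fourier space:

```latex
% Illustrative only: an effective shear viscosity that decays with k*l,
% where \ell is a mean-free-path-like length scale and c a fitted constant.
\[
  \hat{\sigma}(k) = -\,\mu_{\mathrm{eff}}(k\ell)\, i k\, \hat{u}(k),
  \qquad
  \mu_{\mathrm{eff}}(k\ell) = \frac{\mu_0}{1 + c\,(k\ell)^{2}} .
\]
```

In this picture, regressing the stress on higher-order derivatives of the velocity corresponds to expanding such a wavenumber-dependent coefficient in powers of the length scale, which is the sense of "equivalence" referred to above.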
Learning the underlying distribution of molecular graphs and generating high-fidelity samples is a fundamental research problem in drug discovery and material science. However, accurately modeling distribution and rapidly generating novel molecular graphs remain crucial and challenging goals. To accomplish these goals, we propose a novel Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation. Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDE) and derive discrete graph structures as the condition for reverse generative processes. We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states. We further utilize ordinary differential equation (ODE) solvers for efficient graph sampling, based on the semi-linear structure of the probability flow ODE. Experiments on diverse datasets validate the effectiveness of our framework. Particularly, the proposed method still generates high-quality molecular graphs in a limited number of steps.
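Since the abstract highlights sampling with ordinary differential equation solvers over the probability-flow ODE, a generic Euler sampler for a VP-style probability-flow ODE is sketched below. The `score_fn` interface and the beta schedule are placeholders, and the conditioning on discrete graph structures that defines CDGS is omitted.

```python
import torch

@torch.no_grad()
def probability_flow_euler_sampler(score_fn, x, timesteps,
                                   beta=lambda t: 0.1 + 19.9 * t):
    """Euler solver for the VP probability-flow ODE
        dx = [-0.5 * beta(t) * x - 0.5 * beta(t) * score(x, t)] dt,
    integrated from t = 1 down to t ~ 0. `score_fn(x, t)` stands in for a
    trained score network; CDGS additionally conditions the reverse process
    on discrete graph structures, which this generic sketch does not model."""
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        dt = t_next - t_cur  # negative when integrating backwards in time
        drift = -0.5 * beta(t_cur) * (x + score_fn(x, t_cur))
        x = x + drift * dt   # single Euler step of the probability-flow ODE
    return x

# Illustrative usage:
#   samples = probability_flow_euler_sampler(score_fn, torch.randn(8, 64),
#                                            torch.linspace(1.0, 1e-3, 101))
```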
High-utility sequential pattern mining (HUSPM) has emerged as an important topic due to its wide application and considerable popularity. However, because the search space explodes combinatorially when the HUSPM problem involves a low utility threshold or large-scale data, solving it can be time-consuming and memory-costly. Several algorithms have been proposed to address this problem, but they still incur substantial running time and memory usage. In this paper, to solve this problem more efficiently, we design a compact structure called sequence projection (seqPro) and propose an efficient algorithm for discovering high-utility sequential patterns with the seqPro structure (HUSP-SP). HUSP-SP utilizes a compact seq-array to store the necessary information from a sequence database. The seqPro structure is designed to efficiently calculate candidate patterns' utilities and upper-bound values. Furthermore, a new utility upper bound, namely the tighter reduced sequence utility (TRSU), and two search-space pruning strategies are utilized to improve the mining performance of HUSP-SP. Experimental results on both synthetic and real-life datasets show that HUSP-SP significantly outperforms the state-of-the-art algorithms in terms of running time, memory usage, search-space pruning efficiency, and scalability.
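To illustrate how a utility upper bound such as TRSU is used to prune the search space, here is a generic branch-and-bound skeleton for high-utility sequential pattern mining; the callables, recursion scheme, and naming are placeholders, not the HUSP-SP data structures or pruning rules.

```python
def mine(prefix, items, min_utility, exact_utility, utility_upper_bound):
    """Generic branch-and-bound skeleton: a branch is pruned as soon as its
    utility upper bound (e.g. a TRSU-style bound) drops below the threshold.

    exact_utility(pattern) and utility_upper_bound(pattern) are caller-supplied
    placeholders for database-backed utility computations."""
    results = []
    for item in items:
        pattern = prefix + [item]
        if utility_upper_bound(pattern) < min_utility:
            continue  # prune the whole branch rooted at this extension
        if exact_utility(pattern) >= min_utility:
            results.append(pattern)
        results.extend(mine(pattern, items, min_utility,
                            exact_utility, utility_upper_bound))
    return results
```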
Graph Neural Networks (GNNs) have become increasingly important in recent years due to their state-of-the-art performance on many important downstream applications. Existing GNNs have mostly focused on learning a single node representation, even though a node often exhibits polysemous behavior in different contexts. In this work, we develop a persona-based graph neural network framework called PersonaSAGE that learns multiple persona-based embeddings for each node in the graph. Such disentangled representations are more interpretable and useful than a single embedding. Furthermore, PersonaSAGE learns the appropriate set of persona embeddings for each node in the graph, and every node can have a different number of assigned persona embeddings. The framework is flexible, and its general design makes the learned embeddings widely applicable across domains. We utilize publicly available benchmark datasets to evaluate our approach against a variety of baselines. The experiments demonstrate the effectiveness of PersonaSAGE on a variety of important tasks, including link prediction, where we achieve an average gain of 15% while remaining competitive for node classification. Finally, we also demonstrate the utility of PersonaSAGE with a case study on personalized recommendation of different entity types in a data management platform.
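A toy reading of the "multiple persona embeddings per node" idea is sketched below: each node owns its own variable-size set of persona vectors, and a context vector selects among them by attention. The module and its interface are assumptions for illustration, not the PersonaSAGE architecture.

```python
import torch
import torch.nn as nn

class PersonaEmbeddings(nn.Module):
    """Toy persona embeddings: each node has its own (variable) number of
    persona vectors; a context vector picks a mixture of them by attention."""

    def __init__(self, personas_per_node, dim: int):
        super().__init__()
        # One embedding table per node, sized by that node's persona count.
        self.tables = nn.ModuleList(nn.Embedding(k, dim) for k in personas_per_node)

    def forward(self, node_id: int, context: torch.Tensor) -> torch.Tensor:
        """context: (dim,) representation of the current context (e.g. a neighbor)."""
        personas = self.tables[node_id].weight               # (K_v, dim)
        weights = torch.softmax(personas @ context, dim=0)   # attention over personas
        return weights @ personas                            # context-specific embedding
```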
With the development of natural language processing (NLP) techniques, automatic diagnosis of eye diseases using ophthalmology electronic medical records (OEMR) has become possible. The task is to evaluate the condition of each of a patient's eyes, and we formulate it as a particular multi-label classification task in this paper. Although there are a few related studies on other diseases, automatic diagnosis of eye diseases exhibits unique characteristics. First, descriptions of both eyes are mixed up in OEMR documents, with both free text and templated asymptomatic descriptions, resulting in sparsity and clutter of information. Second, OEMR documents contain multiple parts of descriptions and have long document lengths. Third, it is critical to provide explainability for the disease diagnosis model. To overcome these challenges, we present an effective automatic eye disease diagnosis framework, NEEDED. In this framework, a preprocessing module is integrated to improve the density and quality of information. Then, we design a hierarchical transformer structure to learn contextualized representations of each sentence in the OEMR document. For the diagnosis part, we propose an attention-based predictor that enables traceable diagnosis by obtaining disease-specific information. Experiments on a real dataset and comparisons with several baseline models show the advantage and explainability of our framework.
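As a hedged illustration of the attention-based predictor idea (one learned query per disease attending over sentence representations so the attention weights double as evidence), here is a minimal sketch; the dimensions, single-layer design, and sigmoid output are assumptions, not NEEDED's implementation.

```python
import torch
import torch.nn as nn

class AttentionDiseasePredictor(nn.Module):
    """One learned query per disease attends over sentence representations,
    so the attention weights indicate which sentences drove each prediction."""

    def __init__(self, num_diseases: int, dim: int):
        super().__init__()
        self.disease_queries = nn.Parameter(torch.randn(num_diseases, dim) * 0.02)
        self.classifier = nn.Linear(dim, 1)

    def forward(self, sentence_reprs: torch.Tensor):
        """sentence_reprs: (num_sentences, dim) from an upstream (hierarchical) encoder."""
        attn = torch.softmax(self.disease_queries @ sentence_reprs.T, dim=-1)  # (D, S)
        disease_context = attn @ sentence_reprs                                # (D, dim)
        logits = self.classifier(disease_context).squeeze(-1)                  # (D,)
        return torch.sigmoid(logits), attn  # per-disease probability + evidence weights
```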